{
"cells": [
{
"cell_type": "markdown",
"id": "5dafe599",
"metadata": {},
"source": [
"# Simple MCP demo\n",
"- Send a query to LLM, says it doesn't know\n",
"- Give it a tool to help, and it knows!<br /> <br />\n",
"\n",
"- Model Context Protocol (MCP) is like a USB standard for LLM/tool integration: it lets you plug 3rd party tools into your AI, letting e.g. a desktop client make Plotly charts or request additional info from Wikipedia. <br /> <br />\n",
"\n",
"- LLMs on their owns are like librarians, you can talk to them, they can understand you at some level and give you information back. With MCP and tools, they gain access to more information, can take actions, potentially including internal network apps and data, and can become more like personal assistants.<br /> <br />\n",
"\n",
"- MCP has 3 components:\n",
" - An MCP client which initiates a conversation and requests to the LLM and MCP server. MCP client is for example a chat client (Claude Desktop is great) or an IDE (Cursor, Windsurf) that is asking an LLM for help with code\n",
" - An MCP server which provides tools for a purpose - A Fetch or Playwright tool to browse the Web, or a Context7 tool to look up Python module documentation in a vector DB\n",
" - An LLM which supports tool use - any major LLM that follows the OpenAI tool use standard <br /> <br />\n",
"\n",
"- MCP Flow:\n",
" - User launches MCP client, it connects to MCP server(s) per its configuration.\n",
" - MCP client asks MCP server to list the tools it offers to the client.\n",
" - MCP client can prompt the LLM, providing a list of available tools (calling signatures, and semantic descriptions of when to call them).\n",
" - LLM responds to prompt. If, based on the prompt and the available tools, a tool would be the best way to answer the question, LLM will respond with a tool call request and the parameter values for the calling signature.\n",
" - MCP client then calls the tools using the provided signature, adds the output from the tool to the conversation, and calls the LLM again with the updated conversation.\n",
" - LLM may respond with further tool call requests, or provide a response.\n",
" - That's mostly it. Besides executing tool calls, servers can also provide static resources for the client like docs, reference prompts, see the [docs on the home page](https://modelcontextprotocol.io/overview) <br /> <br />\n",
"\n",
"- > 10,000 MCP servers available \n",
" - [MCP Market leaderboard (by GitHub stars)](https://mcpmarket.com/leaderboards)\n",
" - [PulseMCP directory (by downloads)](https://www.pulsemcp.com/servers?sort=popular-30-days-desc)\n",
" - [Smithery](https://smithery.ai/)\n",
" - [LobeHub](https://lobehub.com/mcp)\n",
" - [Glama](https://glama.ai/mcp/servers)<br /> <br />\n",
"\n",
"- Without even coding, you can connect a client like Claude Desktop to them via configurations, and you get reasoning models + deep research + actions, which can be a force multiplier for analysts and knowledge workers.<br /> <br />\n",
"\n",
"- In the example below, we make an MCP server in a few lines of code to answer Monty Python's most famous question, and insert some pdb breakpoints so we can step through the flow. <br /> <br />\n",
"\n",
"\n",
"- More info:\n",
" - [Anthropic MCP Announcement](https://www.anthropic.com/news/model-context-protocol)\n",
" - [Anthropic YouTube talk](https://www.youtube.com/watch?v=kQmXtrmQ5Zg)\n",
" - [Model Context Protocol home page on GitHub](https://github.com/modelcontextprotocol)\n",
" - [Composio intro](https://composio.dev/blog/what-is-model-context-protocol-mcp-explained)<br /> <br />\n",
" \n",
" \n",
" "
]
},
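{
"cell_type": "markdown",
"id": "3fa1c2de",
"metadata": {},
"source": [
"A compressed sketch of the loop above, for orientation only. `answer`, `llm`, and `session` are hypothetical names: `session` is assumed to be an already-initialized MCP `ClientSession` (the real connection logic appears in `MCPClient` below), and the message shapes follow the Anthropic Messages API used later in this notebook.\n",
"\n",
"```python\n",
"# Hypothetical sketch of the MCP tool-call loop; not the full client below.\n",
"import asyncio\n",
"from anthropic import Anthropic\n",
"\n",
"async def answer(llm: Anthropic, session, model: str, query: str) -> str:\n",
"    # 1. ask the MCP server what tools it offers\n",
"    tools = [{\"name\": t.name, \"description\": t.description,\n",
"              \"input_schema\": t.inputSchema}\n",
"             for t in (await session.list_tools()).tools]\n",
"    messages = [{\"role\": \"user\", \"content\": query}]\n",
"    while True:\n",
"        # 2. prompt the LLM, advertising the available tools\n",
"        response = llm.messages.create(model=model, max_tokens=1024,\n",
"                                       messages=messages, tools=tools)\n",
"        if response.stop_reason != \"tool_use\":\n",
"            return response.content[0].text  # final answer\n",
"        # 3. execute each requested tool call via the MCP server\n",
"        messages.append({\"role\": \"assistant\", \"content\": response.content})\n",
"        results = [{\"type\": \"tool_result\", \"tool_use_id\": b.id,\n",
"                    \"content\": str((await session.call_tool(b.name, b.input)).content)}\n",
"                   for b in response.content if b.type == \"tool_use\"]\n",
"        # 4. feed the tool results back to the LLM and loop\n",
"        messages.append({\"role\": \"user\", \"content\": results})\n",
"\n",
"# usage (hypothetical): print(asyncio.run(answer(Anthropic(), session, MODEL, query)))\n",
"```"
]
},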
{
"cell_type": "markdown",
"id": "34a5aa95",
"metadata": {},
"source": [
"# Example Code"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "d71cafca",
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"import os\n",
"import dotenv\n",
"import re\n",
"from datetime import datetime, timedelta\n",
"import time\n",
"from typing import Dict, Any, Optional, Annotated\n",
"from urllib.parse import urljoin, urlparse\n",
"\n",
"import asyncio\n",
"import nest_asyncio\n",
"\n",
"from contextlib import AsyncExitStack\n",
"import mcp\n",
"from mcp.client.stdio import stdio_client\n",
"from mcp import ClientSession, StdioServerParameters\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_core.messages import SystemMessage, HumanMessage\n",
"\n",
"import anthropic\n",
"from anthropic import Anthropic\n",
"import pdb\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "174dc59a",
"metadata": {},
"outputs": [],
"source": [
"# load secrets from .env including API keys\n",
"dotenv.load_dotenv()\n",
"\n",
"# enable asyncio in jupyter notebook\n",
"nest_asyncio.apply()\n",
"\n",
"# Initialize plotly for Jupyter\n",
"# init_notebook_mode(connected=True)\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "de8a6078",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Available Claude models:\n",
"claude-opus-4-20250514\n",
"claude-sonnet-4-20250514\n",
"claude-3-5-haiku-20241022\n",
"\n",
"✓ claude-opus-4-20250514\n",
"The airspeed velocity of an unladen swallow depends on whether you mean an African or European swallow!\n",
"\n",
"- **European swallow**: approximately 20.1 mph (32.4 km/h)\n",
"- **African swallow**: approximately 24 mph (38.6 km/h)\n",
"\n",
"This is, of course, a reference to the famous scene from \"Monty Python and the Holy Grail\" where the Bridge Keeper asks this seemingly impossible question. In the film, when King Arthur responds \"What do you mean? An African or European swallow?\" the Bridge Keeper doesn't know and is thrown into the gorge.\n",
"\n",
"While the Monty Python sketch treats this as an absurd, unanswerable question, actual estimates have been made based on real swallow flight patterns and speeds!\n",
"\n",
"✓ claude-sonnet-4-20250514\n",
"Ah, a classic Monty Python reference! \n",
"\n",
"But you have to be more specific - are you referring to an African or European swallow?\n",
"\n",
"In all seriousness, since you didn't specify, I'll give you both:\n",
"\n",
"- **European swallow** (barn swallow): cruising speed of about 17-20 mph (27-32 km/h), with bursts up to 35 mph (56 km/h)\n",
"- **African swallow** (red-rumped swallow): similar speeds, roughly 20-24 mph (32-38 km/h)\n",
"\n",
"Of course, in the Holy Grail, the correct response to not knowing the answer is to be cast into the gorge of eternal peril! 🏰\n",
"\n",
"✓ claude-3-5-haiku-20241022\n",
"Ah, a classic reference to the movie \"Monty Python and the Holy Grail\"! In the film, when King Arthur is questioned about the airspeed velocity of an unladen swallow, he cannot provide a definitive answer. \n",
"\n",
"In reality, the airspeed velocity would depend on whether you're talking about an African or European swallow, as they have different characteristics. This is part of the humorous exchange in the movie that highlights the absurdity of the question.\n",
"\n",
"If you're looking for a somewhat serious answer, ornithologists haven't precisely measured the exact airspeed of an unladen swallow. Different species of swallows fly at different speeds, typically ranging from 20-40 miles per hour during normal flight.\n",
"\n"
]
}
],
"source": [
"# test anthropic\n",
"client = anthropic.Anthropic()\n",
"\n",
"# https://docs.anthropic.com/en/docs/about-claude/models/overview\n",
"anthropic_models = [\n",
" \"claude-opus-4-20250514\",\n",
" \"claude-sonnet-4-20250514\",\n",
" \"claude-3-5-haiku-20241022\",\n",
"]\n",
"\n",
"print(\"Available Claude models:\")\n",
"print(\"\\n\".join(anthropic_models))\n",
"print()\n",
"\n",
"# Try making a simple completion request to each:\n",
"\n",
"message = \"what is the airspeed velocity of an unladen swallow\"\n",
"for model in anthropic_models:\n",
" try:\n",
" response = client.messages.create(\n",
" model=model,\n",
" max_tokens=200,\n",
" messages=[{\"role\": \"user\", \"content\": message}]\n",
" )\n",
" print(f\"✓ {model}\")\n",
" print(response.content[0].text)\n",
" print()\n",
" except Exception as e:\n",
" print(f\"✗ {model} - error: {str(e)}\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e4eb09f9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Available OpenAI models:\n",
"gpt-4o\n",
"gpt-4o-mini\n",
"gpt-4.1\n",
"gpt-4.1-mini\n",
"o3\n",
"\n",
"✓ gpt-4o\n",
"The question \"What is the airspeed velocity of an unladen swallow?\" is a humorous reference from the movie *Monty Python and the Holy Grail*. In the film, it is posed as an absurdly specific and esoteric question.\n",
"\n",
"For a more straightforward answer, we can refer to actual ornithological data: the airspeed velocity of an unladen European swallow (Hirundo rustica) is estimated to be around 11 meters per second, or approximately 24 miles per hour. However, this is a rough estimate, as the actual speed can vary based on factors like wind conditions and the specific activity the swallow is engaged in.\n",
"\n",
"✓ gpt-4o-mini\n",
"The question about the airspeed velocity of an unladen swallow is a humorous reference to the film \"Monty Python and the Holy Grail.\" In a more scientific context, the airspeed velocity of an unladen European swallow (Hirundo rustica) is estimated to be around 11 meters per second, or approximately 24 miles per hour. However, this figure can vary depending on factors such as wind conditions and the specific species of swallow in question.\n",
"\n",
"✓ gpt-4.1\n",
"Ah, a classic question!\n",
"\n",
"If you’re referencing **Monty Python and the Holy Grail**, the *true* answer is, of course:\n",
"> \"What do you mean? An African or European swallow?\"\n",
"\n",
"But for the sake of science, let’s answer:\n",
"\n",
"### European Swallow (*Hirundo rustica*)\n",
"Ornithologists estimate the airspeed velocity of an unladen European Swallow is:\n",
"**About 11 meters per second** (roughly **24 miles per hour** or **39 km/h**) when cruising.\n",
"\n",
"### African Swallow\n",
"There are several species, but they generally have similar flight speeds, though precise values are less documented.\n",
"\n",
"---\n",
"**In summary:** \n",
"> The airspeed velocity of an unladen European Swallow is about **11 m/s** (24 mph).\n",
"\n",
"And now you may safely cross the Bridge of Death!\n",
"\n",
"✓ gpt-4.1-mini\n",
"Ah, the classic question! If you're referring to the line from *Monty Python and the Holy Grail*, the punchline is all about distinguishing between an African and a European swallow.\n",
"\n",
"But to give you a more scientific answer:\n",
"\n",
"- The airspeed velocity of an unladen **European Swallow** (*Hirundo rustica*) is roughly **20 to 25 miles per hour** (about **32 to 40 kilometers per hour**).\n",
"\n",
"This estimate varies depending on factors like wind conditions and exact bird size.\n",
"\n",
"If you want specifics on the African swallow, that’s a bit less commonly documented, but it’s generally assumed to be somewhat similar.\n",
"\n",
"So: \n",
"**What do you mean? An African or a European swallow?** 😄\n",
"\n",
"✓ o3\n",
"“An African or a European swallow?” \n",
"(Monty Python taught us you have to ask!)\n",
"\n",
"If we leave the movie script and look at real‐world numbers:\n",
"\n",
"• European (Barn) Swallow, Hirundo rustica \n",
" – Typical level-flight speed: ≈ 10 m/s (22–24 mph, 35–40 km/h) \n",
" – Short bursts and dives can be faster, but sustained cruising sits around this figure.\n",
"\n",
"• Several African swallow species (e.g., White-throated Swallow, Hirundo albigularis) fly at broadly similar cruising speeds—on the order of 9–11 m/s—because their size, wing loading, and foraging style are comparable.\n",
"\n",
"So the oft-quoted “about 11 m/s (24 mph)” is a reasonable ballpark for an unladen swallow of either variety—at least until it starts carrying coconuts.\n",
"\n"
]
}
],
"source": [
"# test openai\n",
"from openai import OpenAI\n",
"\n",
"client = OpenAI()\n",
"# https://platform.openai.com/docs/models\n",
"openai_models = [\n",
" \"gpt-4o\",\n",
" \"gpt-4o-mini\",\n",
" \"gpt-4.1\",\n",
" \"gpt-4.1-mini\",\n",
"# \"o3\"\n",
"]\n",
"print(\"Available OpenAI models:\")\n",
"print(\"\\n\".join(openai_models))\n",
"print()\n",
"\n",
"# Try making a simple completion request to each:\n",
"message = \"what is the airspeed velocity of an unladen swallow\"\n",
"for model in openai_models:\n",
" try:\n",
" response = client.chat.completions.create(\n",
" model=model,\n",
" messages=[{\"role\": \"user\", \"content\": message}]\n",
" )\n",
" print(f\"✓ {model}\")\n",
" print(response.choices[0].message.content)\n",
" print()\n",
" except Exception as e:\n",
" print(f\"✗ {model} - error: {str(e)}\")\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "ba5067a4",
"metadata": {},
"source": [
"see swallow_server.py\n",
"```\n",
"\"\"\"\n",
"This module swallow_server.py implements a simple MCP server using FastMCP,\n",
"providing a tool unladen_swallow_airspeed, returns a string based on input swallow type\n",
"\"\"\"\n",
"from mcp.server.fastmcp import FastMCP\n",
"from pydantic import Field, BaseModel\n",
"\n",
"# return schema\n",
"class SwallowSpeed(BaseModel):\n",
" speed: str\n",
" unit: str\n",
" swallow_type: str\n",
"\n",
"mcp = FastMCP(\"swallow-server\")\n",
"\n",
"@mcp.tool()\n",
"def unladen_swallow_airspeed(\n",
" swallow_type: str = Field(description=\"Type of swallow: 'african' or 'european'\")\n",
") -> SwallowSpeed:\n",
" \"\"\"Provides the airspeed velocity of an unladen swallow.\"\"\"\n",
" stype = swallow_type.strip().lower()\n",
" if stype == 'african':\n",
" return SwallowSpeed(speed=\"31.1415926\", unit=\"km/h\", swallow_type=\"african\")\n",
" elif stype == 'european':\n",
" return SwallowSpeed(speed=\"27.1828\", unit=\"km/h\", swallow_type=\"european\")\n",
" else:\n",
" return SwallowSpeed(speed=\"I don't know!\", unit=\"\", swallow_type=stype)\n",
"\n",
"\n",
"def main():\n",
" mcp.run()\n",
"\n",
"\n",
"if __name__ == \"__main__\":\n",
" main()\n",
"```"
]
},
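{
"cell_type": "markdown",
"id": "b7d4e9a2",
"metadata": {},
"source": [
"Before involving any MCP transport, the tool logic can be sanity-checked as plain Python. A minimal sketch, assuming the `@mcp.tool()` decorator returns the underlying function unchanged (as in the reference Python SDK) so it remains directly callable:\n",
"\n",
"```python\n",
"# quick local sanity check of the tool logic, no MCP transport involved\n",
"from swallow_server import unladen_swallow_airspeed\n",
"\n",
"print(unladen_swallow_airspeed(\"african\"))    # speed='31.1415926' unit='km/h' swallow_type='african'\n",
"print(unladen_swallow_airspeed(\"European\"))   # input is normalized with .strip().lower()\n",
"print(unladen_swallow_airspeed(\"american\"))   # falls through to speed=\"I don't know!\"\n",
"```"
]
},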
{
"cell_type": "markdown",
"id": "711acff3",
"metadata": {},
"source": [
"## Test swallow_server.py using MCP Inspector\n",
"- `$ mcp dev swallow_server.py`\n",
"- click 'connect'\n",
"- click 'tools'\n",
"- click 'unladen_swallow_airspeed' tool\n",
"- enter parameters"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "a706529f",
"metadata": {},
"outputs": [],
"source": [
"MODEL = \"claude-sonnet-4-20250514\"\n",
"\n",
"class MCPClient:\n",
" \"\"\"An MCP client adapted to run in a Jupyter notebook.\n",
" \"\"\"\n",
" def __init__(self, model=MODEL):\n",
" self.session: Optional[ClientSession] = None\n",
" self.exit_stack = AsyncExitStack()\n",
" self.model=model\n",
" if model in anthropic_models:\n",
" self.vendor ='Anthropic'\n",
" self.llm = Anthropic()\n",
" elif model in openai_models:\n",
" self.vendor = 'OpenAI'\n",
" self.llm = OpenAI()\n",
" else:\n",
" print(f\"bad model {model}, try again\")\n",
" self.tools = {}\n",
" self.tools_reverse = {}\n",
"\n",
" def connect_to_server(self, server_script_path: str):\n",
" \"\"\"Connect to an MCP server and list its tools.\"\"\"\n",
" print(f\"Connecting to server: {server_script_path}...\")\n",
" is_python = server_script_path.endswith('.py')\n",
" if not is_python:\n",
" raise ValueError(\"Server script must be a .py file\")\n",
"\n",
" server_params = StdioServerParameters(\n",
" command=sys.executable, # Use the same python executable\n",
" args=[server_script_path],\n",
" env=None\n",
" )\n",
"\n",
" pdb.set_trace()\n",
" # print(server_params)\n",
" response = asyncio.run(self.async_connect_to_server(server_params))\n",
" # print(response)\n",
" self.tools[server_script_path] = response.tools\n",
" reverse_tool_dict = {tool.name: server_script_path for tool in response.tools}\n",
" self.tools_reverse = {**self.tools_reverse, **reverse_tool_dict}\n",
" print(\"\\nConnection successful!\")\n",
" print(\"Available tools:\", [(tool.name, tool.description, tool.inputSchema) for tool in self.tools[server_script_path]])\n",
"\n",
" async def async_connect_to_server(self, server_params):\n",
"\n",
" stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))\n",
" self.stdio, self.write = stdio_transport\n",
" self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))\n",
"\n",
" await self.session.initialize()\n",
" response = await self.session.list_tools()\n",
" return response\n",
"\n",
" def process_query(self, query: str) -> str:\n",
" # TODO: implement process_query_openai, call based on self.vendor\n",
" if self.vendor == \"Anthropic\":\n",
" return self.process_query_anthropic(query)\n",
" elif self.vendor == \"OpenAI\":\n",
" return self.process_query_openai(query)\n",
" \n",
" else:\n",
" return (f\"not implemented\")\n",
"\n",
"\n",
" def process_query_anthropic(self, query: str) -> str:\n",
" \"\"\"Process a query using LLM and the available tools.\"\"\"\n",
" \n",
" if not self.session:\n",
" return \"Error: Not connected to a server. Please run connect_to_server first.\"\n",
"\n",
" pdb.set_trace()\n",
" messages = [{\"role\": \"user\", \"content\": query}]\n",
" available_tools = [{\n",
" \"name\": tool.name,\n",
" \"description\": tool.description,\n",
" \"input_schema\": tool.inputSchema\n",
" } for server in self.tools.values() for tool in server]\n",
" print(f\"Sending query to {self.model}...\")\n",
" response = self.llm.messages.create(\n",
" model=self.model, \n",
" max_tokens=1024,\n",
" messages=messages,\n",
" tools=available_tools\n",
" )\n",
"\n",
" final_text = []\n",
" for content in response.content:\n",
" if content.type == 'text':\n",
" final_text.append(content.text)\n",
" elif content.type == 'tool_use':\n",
" tool_name = content.name\n",
" tool_args = content.input\n",
" print(f\"{self.model} requested to use tool: {tool_name} with arguments: {tool_args}\")\n",
"\n",
" result = asyncio.run(self.session.call_tool(tool_name, tool_args))\n",
" print(f\"Received result from MCP tool: {result.content} \")\n",
"\n",
" # Create the tool result content block\n",
" tool_result_content = {\n",
" \"type\": \"tool_result\",\n",
" \"tool_use_id\": content.id,\n",
" \"content\": str(result.content) # Ensure content is a string\n",
" }\n",
"\n",
" # Append the original assistant message and the tool result\n",
" messages.append({\"role\": \"assistant\", \"content\": response.content})\n",
" messages.append({\"role\": \"user\", \"content\": [tool_result_content]})\n",
"\n",
" # Get next response from LLM\n",
" print(f\"Tool result: {tool_result_content}\")\n",
" print(f\"Sending tool result back to {self.model}...\")\n",
" follow_up_response = self.llm.messages.create(\n",
" model=self.model,\n",
" max_tokens=1024,\n",
" messages=messages,\n",
" )\n",
" for follow_up_content in follow_up_response.content:\n",
" if follow_up_content.type == 'text':\n",
" final_text.append(follow_up_content.text) \n",
"\n",
" return \"\\n\".join(final_text)\n",
"\n",
"\n",
" def process_query_openai(self, query: str) -> str:\n",
" \"\"\"Process a query using LLM and the available tools.\"\"\"\n",
" \n",
" if not self.session:\n",
" return \"Error: Not connected to a server. Please run connect_to_server first.\"\n",
"\n",
" pdb.set_trace()\n",
" messages = [{\"role\": \"user\", \"content\": query}]\n",
" available_tools = [\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": tool.name,\n",
" \"description\": tool.description,\n",
" \"parameters\": tool.inputSchema\n",
" }\n",
" }\n",
" for server in self.tools.values() for tool in server]\n",
" print(f\"Sending query to {self.model}...\")\n",
" response = self.llm.chat.completions.create(\n",
" model=self.model,\n",
" messages=messages,\n",
" tools=available_tools,\n",
" tool_choice=\"auto\", # auto = let the model decide whether to call a tool\n",
" )\n",
" response_message = response.choices[0].message\n",
"\n",
" # Check if the model decided to call a tool\n",
" if response_message.tool_calls:\n",
" tool_call = response_message.tool_calls[0]\n",
" tool_name = tool_call.function.name\n",
" tool_args = json.loads(tool_call.function.arguments)\n",
" print(f\"{self.model} requested to use tool: {tool_name} with arguments: {tool_args}\")\n",
" result = asyncio.run(self.session.call_tool(tool_name, tool_args))\n",
" print(f\"Received result from MCP tool: {result.content} \")\n",
"\n",
" # Append the original assistant message and the tool result\n",
" messages.append({\n",
" \"role\": \"assistant\",\n",
" \"content\": response_message.content, # This might be None for tool calls\n",
" \"tool_calls\": [\n",
" {\n",
" \"id\": tool_call.id,\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": tool_call.function.name,\n",
" \"arguments\": tool_call.function.arguments\n",
" }\n",
" }\n",
" ]\n",
" })\n",
" # Add tool result\n",
" messages.append({\n",
" \"tool_call_id\": tool_call.id,\n",
" \"role\": \"tool\",\n",
" \"name\": tool_name,\n",
" \"content\": json.dumps(result.structuredContent)\n",
" })\n",
" # Send the messages with the tool's output back to the model\n",
" second_response = self.llm.chat.completions.create(\n",
" model=self.model,\n",
" messages=messages,\n",
" )\n",
" second_response_message = second_response.choices[0].message\n",
" return second_response_message.content\n",
" else:\n",
" return response_message.content\n",
"\n",
" return \"\\n\".join(final_text)\n",
"\n",
" def chat_loop(self):\n",
" \"\"\"Run an interactive chat loop\"\"\"\n",
" print(\"\\nMCP Client Started!\")\n",
" print(\"Type your queries or 'quit' to exit.\")\n",
"\n",
" while True:\n",
" try:\n",
" query = input(\"\\nQuery: \").strip()\n",
"\n",
" if query.lower() == 'quit':\n",
" break\n",
"\n",
" response = self.process_query(query)\n",
" print(\"\\n\" + response)\n",
"\n",
" except Exception as e:\n",
" print(f\"\\nError: {str(e)}\")\n",
"\n",
" def cleanup(self):\n",
" \"\"\"Clean up resources and close the server connection.\"\"\"\n",
" print(\"Cleaning up resources...\")\n",
" asyncio.run(self.exit_stack.aclose())\n",
" print(\"Cleanup complete.\")\n"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "a77514d4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connecting to server: swallow_server.py...\n",
"> \u001b[32m/var/folders/6d/3xz907yn5ylg43s2vlnnzptr0000gn/T/ipykernel_19604/148448282.py\u001b[39m(\u001b[92m36\u001b[39m)\u001b[36mconnect_to_server\u001b[39m\u001b[34m()\u001b[39m\n",
"\u001b[32m 34\u001b[39m pdb.set_trace()\n",
"\u001b[32m 35\u001b[39m \u001b[38;5;66;03m# print(server_params)\u001b[39;00m\n",
"\u001b[32m---> 36\u001b[39m response = asyncio.run(self.async_connect_to_server(server_params))\n",
"\u001b[32m 37\u001b[39m \u001b[38;5;66;03m# print(response)\u001b[39;00m\n",
"\u001b[32m 38\u001b[39m self.tools[server_script_path] = response.tools\n",
"\n",
"ipdb> c\n",
"\n",
"Connection successful!\n",
"Available tools: [('unladen_swallow_airspeed', 'Provides the airspeed velocity of an unladen swallow.', {'properties': {'swallow_type': {'description': \"Type of swallow: 'african' or 'european'\", 'title': 'Swallow Type', 'type': 'string'}}, 'required': ['swallow_type'], 'title': 'unladen_swallow_airspeedArguments', 'type': 'object'})]\n"
]
}
],
"source": [
"def connect():\n",
" client = MCPClient(MODEL)\n",
" client.connect_to_server('swallow_server.py')\n",
" return client\n",
"\n",
"# Run the connection and keep the client object\n",
"# This will block until the connection is established.\n",
"client = connect()\n"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "ed97ddd5",
"metadata": {},
"outputs": [],
"source": [
"def run_query(query):\n",
" response = client.process_query(query)\n",
" print(\"\\n--- LLM's Response ---\")\n",
" print(response)\n",
" print(\"-------------------------\")\n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "a32d4403",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"> \u001b[32m/var/folders/6d/3xz907yn5ylg43s2vlnnzptr0000gn/T/ipykernel_19604/148448282.py\u001b[39m(\u001b[92m72\u001b[39m)\u001b[36mprocess_query_anthropic\u001b[39m\u001b[34m()\u001b[39m\n",
"\u001b[32m 70\u001b[39m \n",
"\u001b[32m 71\u001b[39m pdb.set_trace()\n",
"\u001b[32m---> 72\u001b[39m messages = [{\u001b[33m\"role\"\u001b[39m: \u001b[33m\"user\"\u001b[39m, \u001b[33m\"content\"\u001b[39m: query}]\n",
"\u001b[32m 73\u001b[39m available_tools = [{\n",
"\u001b[32m 74\u001b[39m \u001b[33m\"name\"\u001b[39m: tool.name,\n",
"\n",
"ipdb> c\n",
"Sending query to claude-sonnet-4-20250514...\n",
"claude-sonnet-4-20250514 requested to use tool: unladen_swallow_airspeed with arguments: {'swallow_type': 'african'}\n",
"Received result from MCP tool: [TextContent(type='text', text='{\\n \"speed\": \"31.1415926\",\\n \"unit\": \"km/h\",\\n \"swallow_type\": \"african\"\\n}', annotations=None, meta=None)] \n",
"Tool result: {'type': 'tool_result', 'tool_use_id': 'toolu_01K88EGAVBTWtaP2rk5QTa8i', 'content': '[TextContent(type=\\'text\\', text=\\'{\\\\n \"speed\": \"31.1415926\",\\\\n \"unit\": \"km/h\",\\\\n \"swallow_type\": \"african\"\\\\n}\\', annotations=None, meta=None)]'}\n",
"Sending tool result back to claude-sonnet-4-20250514...\n",
"\n",
"--- LLM's Response ---\n",
"The airspeed velocity of an unladen African swallow is approximately 31.14 km/h.\n",
"\n",
"(This is, of course, a reference to the famous question from \"Monty Python and the Holy Grail\" - in reality, the average flight speed of barn swallows, which are found in Africa, is typically around 30-35 km/h or 17-20 mph during regular flight.)\n",
"-------------------------\n"
]
}
],
"source": [
"# Run a query\n",
"query = \"What is the airspeed velocity of an unladen african swallow?\"\n",
"run_query(query)\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "dc502dfe",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"MCP Client Started!\n",
"Type your queries or 'quit' to exit.\n",
"\n",
"Query: what is the airspeed velocity of an unladen American swallow?\n",
"> \u001b[32m/var/folders/6d/3xz907yn5ylg43s2vlnnzptr0000gn/T/ipykernel_19604/148448282.py\u001b[39m(\u001b[92m72\u001b[39m)\u001b[36mprocess_query_anthropic\u001b[39m\u001b[34m()\u001b[39m\n",
"\u001b[32m 70\u001b[39m \n",
"\u001b[32m 71\u001b[39m pdb.set_trace()\n",
"\u001b[32m---> 72\u001b[39m messages = [{\u001b[33m\"role\"\u001b[39m: \u001b[33m\"user\"\u001b[39m, \u001b[33m\"content\"\u001b[39m: query}]\n",
"\u001b[32m 73\u001b[39m available_tools = [{\n",
"\u001b[32m 74\u001b[39m \u001b[33m\"name\"\u001b[39m: tool.name,\n",
"\n",
"ipdb> c\n",
"Sending query to claude-sonnet-4-20250514...\n",
"\n",
"I can help you find the airspeed velocity of an unladen swallow, but I need to clarify something. The function I have access to can provide information for \"african\" or \"european\" swallows, but you asked about an \"American\" swallow.\n",
"\n",
"Could you please specify whether you'd like the airspeed velocity for an African or European swallow instead? These are the two types I can look up for you.\n",
"\n",
"Query: Query: what is the airspeed velocity of an unladen European swallow?\n",
"> \u001b[32m/var/folders/6d/3xz907yn5ylg43s2vlnnzptr0000gn/T/ipykernel_19604/148448282.py\u001b[39m(\u001b[92m72\u001b[39m)\u001b[36mprocess_query_anthropic\u001b[39m\u001b[34m()\u001b[39m\n",
"\u001b[32m 70\u001b[39m \n",
"\u001b[32m 71\u001b[39m pdb.set_trace()\n",
"\u001b[32m---> 72\u001b[39m messages = [{\u001b[33m\"role\"\u001b[39m: \u001b[33m\"user\"\u001b[39m, \u001b[33m\"content\"\u001b[39m: query}]\n",
"\u001b[32m 73\u001b[39m available_tools = [{\n",
"\u001b[32m 74\u001b[39m \u001b[33m\"name\"\u001b[39m: tool.name,\n",
"\n",
"ipdb> c\n",
"Sending query to claude-sonnet-4-20250514...\n",
"claude-sonnet-4-20250514 requested to use tool: unladen_swallow_airspeed with arguments: {'swallow_type': 'european'}\n",
"Received result from MCP tool: [TextContent(type='text', text='{\\n \"speed\": \"27.1828\",\\n \"unit\": \"km/h\",\\n \"swallow_type\": \"european\"\\n}', annotations=None, meta=None)] \n",
"Tool result: {'type': 'tool_result', 'tool_use_id': 'toolu_0155kAFEPbaVxETy1Wm2vrzQ', 'content': '[TextContent(type=\\'text\\', text=\\'{\\\\n \"speed\": \"27.1828\",\\\\n \"unit\": \"km/h\",\\\\n \"swallow_type\": \"european\"\\\\n}\\', annotations=None, meta=None)]'}\n",
"Sending tool result back to claude-sonnet-4-20250514...\n",
"\n",
"I'll help you find the airspeed velocity of an unladen European swallow!\n",
"According to my calculations, the airspeed velocity of an unladen European swallow is approximately **27.18 km/h** (about 16.9 mph).\n",
"\n",
"Of course, this is a reference to the famous question from \"Monty Python and the Holy Grail\"! In the movie, this question stumps the Bridge Keeper. But if we're being scientific about it, European swallows (likely referring to barn swallows, which are common in Europe) typically fly at cruising speeds in this range, though they can fly much faster when diving or in other flight modes.\n",
"\n",
"*What do you mean? An African or European swallow?* 🦅\n",
"\n",
"Query: quit\n"
]
}
],
"source": [
"# Run a chat loop\n",
"client.chat_loop()\n"
]
},
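{
"cell_type": "markdown",
"id": "d9a3b6c4",
"metadata": {},
"source": [
"When you're done, the spawned server subprocess can be shut down using the `cleanup` method defined on `MCPClient` above:\n",
"\n",
"```python\n",
"# close the stdio transport and terminate the server subprocess\n",
"client.cleanup()\n",
"```"
]
},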
{
"cell_type": "markdown",
"id": "4217df77",
"metadata": {},
"source": [
"# Claude as MCP Client\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"id": "346be417",
"metadata": {},
"source": [
"### Make an equity research report using tools\n",
"\n",
"- Write custom Python tools in [server.py](https://github.com/druce/MCP/blob/master/server.py)\n",
"- Configure this custom MCP server and others for the Claude Desktop client using [claude_desktop_config.json](https://github.com/druce/MCP/blob/master/claude_desktop_config.json) . Some tools may needs command line configs, API secrets. \n",
"- We can first send some Claude prompts to ensure certain information is in the context, then call a Deep Research prompt that uses the information in the context, retrieved using tools and deep research, to write a report according to a complex structure in the prompt.\n",
"- [Example series of prompts](https://claude.ai/share/9a679e68-469b-48b2-895a-628d933b64d9)\n",
"- [Example report](https://claude.ai/public/artifacts/736116c9-cde1-4d47-aca8-be78abfde6ab)\n",
"\n",
"\n",
"- This notebook in GitHub\n",
" - [https://github.com/druce/MCP/blob/master/Simple%20MCP%20demo.ipynb](https://github.com/druce/MCP/blob/master/Simple%20MCP%20demo.ipynb)"
]
},
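{
"cell_type": "markdown",
"id": "c8e2f5a7",
"metadata": {},
"source": [
"A minimal sketch of what a Claude Desktop entry for the swallow server built earlier might look like; the paths and the API-key placeholder are hypothetical, and the real entries live in the claude_desktop_config.json linked above:\n",
"\n",
"```json\n",
"{\n",
"  \"mcpServers\": {\n",
"    \"swallow-server\": {\n",
"      \"command\": \"/path/to/python\",\n",
"      \"args\": [\"/path/to/swallow_server.py\"],\n",
"      \"env\": {\"EXAMPLE_API_KEY\": \"...\"}\n",
"    }\n",
"  }\n",
"}\n",
"```"
]
},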
{
"cell_type": "code",
"execution_count": null,
"id": "b53b1ee7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "mcptest",
"language": "python",
"name": "mcptest"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}